注意・知覚統合
Attention and Perceptual Integration
P2-2-160
周辺視ディスプレイと身体動揺を利用した歩行誘導と知覚特性の理解
Elucidation of Perceptual Characteristics of Pedestrian Guidance Using a Peripheral-Vision Display and Body Sway

○渡邊紀文1, 森文彦2, 大森隆司2
○Norifumi Watanabe1, Fumihiko Mori2, Takashi Omori2
東京工科大・コンピュータサイエンス1, 玉川大学脳科学研究所2
School of Computer Science, Tokyo University of Technology, Tokyo1, Brain Science Institute, Tamagawa University, Tokyo2

In the daily act of walking, we take in large amounts of sensory information through visual, auditory, tactile, and olfactory channels (among others), and decide how to act. An important information source in these action decisions is explicit visual information, such as signs or arrows. However, in crowded situations, or when avoiding danger, it is difficult to recognize relevant signs, and it may become difficult to take appropriate action. In order to guide ambulatory behavior more effectively, we considered the use of an unconscious or reflex-based guidance method in addition to the usual visual and somatic action signs. The sensation of visual self-motion is referred to as vection. Vection is the perception of self-motion produced by optical flow, without actual motion. When vection occurs, we correct our posture to compensate for the perceived self-motion. Thus, the body-motion illusion caused by vection could be sufficient to induce a reflexive change in gaze direction and an unconscious modification of body-motion direction. Muscle stimulation through vibration has also been found to influence posture. Specifically, a vibration device attached to a subject's leg destabilizes somatic sensation and causes the body to sway. When this occurs, our sense of equilibrium shifts its reliance for postural control toward the available visual input. It is possible that the above reflexes could safely deliver a level of sensation sufficient for the unconscious guidance of walking direction. In this study, we experimented with vision-guided control of walking direction by inducing subjects to shift their gaze direction using optical flow in the peripheral visual field. We confirmed that a shift in a subject's walking direction could be induced by a combination of optical flow and a vibratory stimulus applied to the legs. We propose a switching mechanism for visual and somatosensory input based on inducement timing.
P2-2-161
言語の意味内容と聴覚性注意機能の相互作用の脳基盤
Emotion and attention interactions in social cognition: brain regions involved in processing auditory emotional semantic contents

○岩白訓周1, 夏堀龍暢1, 青木悠太1, 八幡憲明1, 五ノ井渉2, 佐々木弘樹2, 國松聡2, 笠井清登1, 山末英典1
○Norichika Iwashiro1, Tatsunobu Natsubori1, Yuta Aoki1, Noriaki Yahata1, Wataru Gonoi2, Hiroki Sasaki2, Akira Kunimatsu2, Kiyoto Kasai1, Hidenori Yamasue1
東京大学大学院医学系研究科脳神経医学精神医学分野1, 東京大学大学院放射線医学分野2
Department of Neuropsychiatry, Graduate School of Medicine, The University of Tokyo, Tokyo1, Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo2

Detection of potential threats may occur even when they are not initially in the focus of attention, eliciting enhanced perceptual analysis and reorienting of attention. Modulation of sensory processing by affective signals has been found in humans and monkeys. It has been shown that emotional words or faces have such an effect on visual attention and that emotional prosody has one on auditory attention, but it is not known whether emotional words have a comparable effect on auditory attention. Considering the importance of conversation in human communication, emotional words should have such an effect on auditory attention. We therefore developed a psychological task to address this issue and applied it to 24 healthy adults in a preliminary study. To manipulate attention orthogonally to emotional words, we used a dichotic listening paradigm in which a negative, positive, or neutral word was presented to one ear while a neutral word was presented to the other ear simultaneously. Participants selectively attended to either the left or the right ear and performed a word-recognition task for words heard on the target side. Consistent with previous findings that emotional words or faces interfere with visual attention and that emotional prosody interferes with auditory attention, we found that reaction times were significantly longer when negative words, but not positive ones, were presented than when only neutral words were presented, independently of the attention effect and even when the negative words were not the focus of auditory attention. At the meeting, we will present the results of a functional magnetic resonance imaging study using the same task and clarify the neural basis underpinning the detection of auditory emotional words.
P2-2-162
Withdrawn
P2-2-163
聴覚刺激に対する海馬体神経細胞の膜電位応答
Auditory responses of hippocampal neurons

○坂口哲也1, 松本信圭1, 松木則夫1, 池谷裕二1
○Tetsuya Sakaguchi1, Nobuyoshi Matsumoto1, Norio Matsuki1, Yuji Ikegaya1
東京大学大学院 薬学系研究科 薬品作用学1
Laboratory of Chemical Pharmacology, Graduate School of Pharmaceutical Sciences, The University of Tokyo, Tokyo, Japan1

The hippocampal formation has a critical role in memory, learning, and spatial navigation; however, few studies have revealed how sensory information is represented in the hippocampal formation. Strong sensory stimuli, such as dazzling flashes, loud sounds, and intense vibrations, are received by sensory organs and are conveyed to the hippocampus through the thalamus and the cerebral cortex. In terms of pathology, abnormal auditory responses of hippocampal neurons are a possible cause of the abnormal auditory evoked potentials in schizophrenia and are thought to result in aberrant cognition. To reveal auditory coding at the single-cell level in the hippocampal formation, we patch-clamped hippocampal CA1 and presubicular pyramidal neurons in awake mice. We found that 1) hippocampal CA1 neurons exhibited transient hyperpolarization in response to auditory monotone stimuli, 2) their responses showed little modulation by the frequency of the sound, 3) the responses were hardly observed at milder loudness and appeared abruptly for loud sounds, 4) a prepulse stimulus immediately before the loud sound did not change the response, and 5) a presubicular neuron showed similar hyperpolarization. These results suggest that hippocampal auditory responses are insensitive to the frequency of the sound and are likely to act as an all-or-none sound detector as a function of sound pressure.
P2-2-164
ダンスゲーム運動の正確性は中側頭回および前頭極の活動と関連する
Oxygenated hemoglobin activity in the middle temporal gyrus and the frontopolar cortex correlates with enhanced performance during dance video game play

○野本泰徳1, Jack Adam Noah2, 橘篤導3, Shaw Bronner4, 嶋田総太郎1, 小野弓絵1
○Yasunori Nomoto1, Jack Adam Noah2, Atsumichi Tachibana3, Shaw Bronner4, Sotaro Shimada1, Yumie Ono1
明治大院・理工・電気1, イェール大・医・精神2, 星城大・リハ・生理3, ロングアイランド大・ADAMセンター4
Grad Sch of Sci & Tech, Meiji Univ, Kanagawa1, Dept of Psychiatry, Yale Univ. School of Med.2, Fac. of Care and Rehab. (Phys.), Seijoh Univ., Aichi, Japan3, ADAM Center, Long Island Univ., NY, USA4

We used NIRS to investigate changes in regional brain activity during dance video game play. We focused on two areas: the middle temporal gyrus (MTG), an area of multisensory integration, and the frontopolar cortex (FPC), an area associated with multitask decision making. Subjects (21 male, 5 female) played the game by pressing the correct arrow button (up, down, left, or right) on a dance mat with their foot at the correct time. We studied cortical function using a block design with 30 s of dance play followed by 30 s of rest, repeated five times, while measuring changes in oxygenated hemoglobin (oxyHb) levels. Gameplay performance was scored by the number of temporally accurate steps, defined as presses of the proper arrow button within +/-22.5 ms of the correct time. All subjects showed a bell-shaped oxyHb waveform in the MTG, in which high-performance players took a longer time to reach the peak amplitude. Eliminating the background music during gameplay increased the cumulative oxyHb signal in the MTG in high-performance players but decreased it in low-performance players. The oxyHb waveforms in the FPC differed among players: high-performance players showed a small oxyHb increase followed by a steep and sustained decrease during the task period, whereas low-performance players showed a box-car-shaped waveform with prolonged activation. These results suggest that the MTG plays a role in the successful integration of visual and rhythmic cues, and that high-performance players rely on integrating visual and internally generated rhythm to make accurate steps even without external auditory rhythmic cues. The FPC is involved in processing prospective memory while multitasking and may work to compensate for an insufficient ability to integrate visual and rhythmic cues in the MTG in low-performance players. The relative relationships between these cortical areas may indicate high to low performance levels when performing cued motor tasks.
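The step-scoring rule described above (a press of the proper arrow within +/-22.5 ms of the correct time) can be sketched as follows. This is an editorial Python illustration, not the authors' code; the function name, data layout, and example values are hypothetical.

TOLERANCE_S = 0.0225  # +/-22.5 ms window around the correct time

def count_accurate_steps(cue_times, cue_arrows, press_times, pressed_arrows):
    """Count presses of the correct arrow made within the tolerance window."""
    accurate = 0
    for cue_t, cue_arrow, press_t, press_arrow in zip(
            cue_times, cue_arrows, press_times, pressed_arrows):
        if press_arrow == cue_arrow and abs(press_t - cue_t) <= TOLERANCE_S:
            accurate += 1
    return accurate

# Example: three cues; the second press is 30 ms late, so it does not count.
print(count_accurate_steps(
    cue_times=[1.00, 2.00, 3.00],
    cue_arrows=["up", "left", "down"],
    press_times=[1.01, 2.03, 2.99],
    pressed_arrows=["up", "left", "down"]))  # -> 2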
P2-2-165
音脈のリセットに対する音源移動および頭部運動の効果
Effects of sound motion and head motion on the resetting of auditory streaming

○近藤洋史1, Daniel Pressnitzer2,3, 戸嶋巌樹1, 柏野牧夫1,4
○Hirohito M. Kondo1, Daniel Pressnitzer2,3, Iwaki Toshima1, Makio Kashino1,4
日本電信電話株式会社・NTTコミュニケーション科学基礎研究所1, CNRS and Universite Paris Descartes2, Ecole normale superieure3, 東工大院・総合理工4
NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan1, CNRS and Universite Paris Descartes, Paris, France2, Ecole normale superieure, Paris, France3, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama, Japan4

Auditory scene analysis needs to parse the incoming flow of acoustic information into perceptual streams, such as distinct musical melodies or sentences from a single talker. Previous studies have demonstrated that the formation of auditory streams is not instantaneous; streaming builds up over time and can be reset by sudden changes in the acoustics of the scene. Here, we examined the effect of changes induced by exogenous sound motion and voluntary head motion on streaming. A telepresence robot in a virtual reality setup was used to disentangle all potential consequences of head motion: changes in acoustic cues at the ears, changes in apparent sound location, and changes in motor or attentional processes. The results showed that head motion, as well as sound motion, induced the resetting of two streams into one stream. An additive model analysis further revealed that resetting was strongly influenced by acoustic cues and apparent sound location rather than by non-auditory factors related to head motion. Thus, low-level changes in sensory cues can affect perceptual organization, even though those changes are fully accounted for by the head motion of the listener. We consider that our results reflect a widely distributed neural architecture for the formation of auditory streams.
P2-2-166
硬式テニス運動想像時の熟練者と初心者の脳活動の違い
Differences in brain activity between tennis experts and novices during motor imagery

○山下歩1,2, 石井信1,2, 今水寛1,3
○Ayumu Yamashita1,2, Shin Ishii1,2, Hiroshi Imamizu1,3
ATR 認知機構研究所1, 京都大院 情報 システム科学2, 情報通信研究機構・脳情報通信融合研究センター3
ATR Cognitive Mechanisms Labs, Kyoto, Japan1, Dept of System Science, Grad. Sch. of Informatics, Kyoto Univ., Kyoto, Japan2, NICT Center for Information and Neural Networks, Osaka, Japan3

Many studies have revealed differences in brain activity between sports experts and novices. For example, brain activity in expert golfers or archers during the preparation of movements is radically different from that of novices. Experts activate restricted motor-related regions, whereas novices strongly activate broad regions, suggesting that through long-term practice experts develop a focused and efficient organization of task-related neural networks. However, previous studies mainly focused on sports dominated by closed skills, such as golf. In tennis, open skill, which is used to make decisions in response to the moves and decisions of opponents, is crucial. We investigated the brain activity of expert and novice tennis players during the return (open skill) and the serve (closed skill) using fMRI. Two experts (five years of experience or more), two inexperienced players (less than five years), and two novices (little experience) participated in our experiment. During the closed-skill task, they performed kinesthetic motor imagery of their own serve in an fMRI scanner while watching 10-s videos of an opponent waiting to make a return. During the open-skill task, they judged the direction of the ball in the opponent's serve and reported their reaction by pushing a button while watching the 10-s videos. When we subtracted the brain activity during the closed-skill task from that during the open-skill task, activity in the middle temporal area (MT) was stronger in the experts than in the inexperienced players and novices. Additionally, we found significant activity in the superior temporal sulcus (STS) only in the experts. Previous studies have suggested that MT is related to the processing of visual motion and that the STS is related to the inference of the intentions of others. Our results suggest that differences in open skills between experts and novices mainly lie in the analysis of opponent movements and the inference of opponent intentions based on those movements.
P2-2-167
仮現運動知覚における知覚バイアスに関連する左右半球間の神経同期の個人差
Individual differences in interhemispheric neural synchrony associated with perceptual bias in apparent motion perception

○水野佑治1,2, 川崎真弘2,3, 北城圭一1,2,3,4
○Yuji Mizuno1,2, Masahiro Kawasaki2,3, Keiichi Kitajo1,2,3,4
農工大院 工学 電子情報工学1, 理研 BSI BTCC RBIP2, 理研 BSI ABSP3, JST さきがけ4
Dept of Electronic and Information Engineering, Tokyo Univ of Agriculture and Technology (TUAT), Japan1, RIKEN BSI BTCC RBIP, Japan2, RIKEN BSI ABSP, Japan3, PRESTO, Japan Science and Technology Agency (JST), Japan4

It has been reported that neural synchrony between the left and right hemispheres is associated with the perception of apparent motion, as studied with an ambiguous apparent-motion stimulus called the Dynamical Dot Quartet (DDQ) (Rose et al., 2005). That study found stronger gamma-band neural synchrony between left and right hMT+ for horizontal motion perception than for vertical motion perception. It is also known that there are individual differences in the perceptual bias estimated by the ratio of the durations of horizontal and vertical motion (Shimono et al., 2012). The mechanisms mediating this perceptual bias, however, are not well understood. We hypothesized that individual differences in synchrony between the two hemispheres account for individual differences in perceptual bias in the DDQ, and we tested this hypothesis with a manipulative approach using combined transcranial magnetic stimulation and electroencephalography (TMS-EEG).
We recorded 64-channel EEG signals from 20 participants. We estimated instantaneous phase using the Gabor wavelet and used a phase synchronization index (PSI) to quantify neural synchrony between left and right hMT+ during DDQ perception. Specifically, we estimated the difference in PSI between horizontal and vertical motion perception for each participant. Second, single-pulse TMS was delivered to left or right hMT+ in an eyes-open resting condition, and we analyzed the propagation of TMS-evoked phase resetting as a measure of interhemispheric connectivity.
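As an illustration of the phase analysis described above, the following minimal Python sketch estimates instantaneous phase with a Gabor (complex Morlet) wavelet and computes a phase synchronization index between two channels (e.g., electrodes over left and right hMT+). It is an editorial sketch, not the authors' pipeline; the sampling rate, the 40 Hz gamma-band target frequency, the 7-cycle wavelet, and all names are illustrative assumptions.

import numpy as np

def gabor_phase(signal, fs, freq, n_cycles=7):
    """Instantaneous phase of `signal` at `freq` (Hz) via a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)            # temporal width of the wavelet
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.angle(analytic)

def phase_synchronization_index(x, y, fs, freq):
    """PSI = |<exp(i*(phi_x - phi_y))>| over time; 1 = perfect locking, 0 = none."""
    dphi = gabor_phase(x, fs, freq) - gabor_phase(y, fs, freq)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Example with synthetic 40 Hz signals sharing a common phase plus noise
fs, f = 1000, 40.0
t = np.arange(0, 2, 1.0 / fs)
common = np.sin(2 * np.pi * f * t)
x = common + 0.5 * np.random.randn(t.size)
y = common + 0.5 * np.random.randn(t.size)
print(phase_synchronization_index(x, y, fs, f))  # close to 1 for phase-locked signals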
A significant correlation between perceptual bias and the difference in PSI between horizontal and vertical motion perception was observed for the left and right hMT+ electrode pair in the gamma band. We also found a significant correlation between perceptual bias and the interhemispheric propagation of TMS-induced phase resetting in the gamma band during the resting state.
The results indicate that individual differences in perceptual bias in apparent motion perception depend on interhemispheric connectivity mediated by neural synchrony networks.
P2-2-168
網膜への遠心性投射ニューロンの薬理学的不活性化は視覚誘導性到達運動の標的選択を可逆的に障害する
Pharmacological inactivation of the neurons centrifugally projecting to the retina reversibly impairs target selection for visually guided reaching

○内山博之1, 大野裕史1, 田口久喜1, 猪崎俊1
○Hiroyuki Uchiyama1, Hiroshi Ohno1, Hisayoshi Taguchi1, Takashi Izaki1
鹿児島大学大学院 理工学研究科 情報生体システム工学1
Dept Informatics and Biomedical Engineering, Kagoshima Univ, Kagoshima1

Neurons in the isthmo-optic nucleus (ION) of the avian midbrain tegmentum send their axons to the contralateral retina and enhance retinal output transiently and topographically. Lesions of the ION impair target selection for visually guided reaching (Uchiyama et al., 2012). However, ION-lesioned birds gradually recovered from their impairments, and some birds recovered almost perfectly within two weeks. In this study, we used a reversible pharmacological inactivation method, instead of a permanent lesion, to avoid the suspected gradual compensation of ION function by neural structures other than the ION. We trained Japanese quail to reach for a target stimulus on a touch-screen monitor with their beak. After training was completed, a guide cannula (o.d. = 0.46 mm, i.d. = 0.24 mm) was inserted stereotaxically into the ION, and the base of the guide cannula was fixed to the skull under anesthesia. After recovery from surgery, 0.4 μl of saline or 0.05% muscimol solution was slowly injected (0.025 μl/min) into the ION through an internal cannula (o.d. = 0.20 mm, i.d. = 0.10 mm) on alternate days over five days. Daily performance on the visually guided reaching task after each injection was evaluated. Compared with saline injection, injection of muscimol significantly decreased response accuracy when the target was presented simultaneously with distractors, but not when it was presented alone. Thus, we conclude that the involvement of the ION in target selection for visually guided reaching is confirmed.
P2-2-169
Unconscious feature binding of color and orientation
○Hsin-I Liao1,2, Yung-Hao Yang2, Su-Ling Yeh2
Human Information Science Laboratory, NTT Communication Science Laboratories1, Department of Psychology, National Taiwan University2

Visual features are processed in distinct brain areas, yet it is unknown whether visual awareness is necessary to create an integrated object representation. We investigated whether visual feature binding can be accomplished without awareness. Using the continuous flash suppression procedure, a bar with a certain color (red or green) and orientation (horizontal or vertical) was presented to one eye and interocularly suppressed by dynamic masks presented to the other eye. Participants were asked to detect the color bar and, if they could not see it, were forced to guess which of the four combinations of color and orientation had been presented. In E1, for stimuli reported to be invisible, guessing responses identifying the object (i.e., both features concurrently) were significantly more accurate than an estimate based on the assumption that the two features were identified independently, indicating unconscious feature binding. In E2, catch trials were added to estimate sensitivity to the unseen object. The unconscious feature binding effect remained while d' was near zero, confirming that even when the presence of the object cannot be detected, its features can still be bound unconsciously. E3 examined whether distinct features can form an integrated object when binding requires spatial or temporal summation: in the suppressed eye, a gray bar (orientation) and a color blob (color) were presented separately, at different spatial or temporal positions. No unconscious feature binding of this sort was found. This result not only indicates a failure of unconscious feature binding that requires spatial or temporal summation, but also excludes the possibility that the unconscious feature binding effects observed in E1 and E2 were due to guessing of the separate features at the response level. The overall results suggest that an integrated object binding color and orientation can be formed without awareness.
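The independence baseline used in E1 can be stated explicitly: if color and orientation were identified independently, the expected accuracy for reporting both features correctly would be the product of the two marginal accuracies. The following is a minimal sketch of that calculation (an editorial illustration, not the authors' analysis; the example numbers are hypothetical).

def independence_baseline(p_color_correct, p_orientation_correct):
    """Expected joint accuracy if the two features are guessed independently."""
    return p_color_correct * p_orientation_correct

# Example: marginal accuracies of 0.6 each predict a joint accuracy of 0.36;
# an observed joint accuracy reliably above this baseline would indicate that
# the two features were bound rather than identified independently.
print(independence_baseline(0.6, 0.6))  # -> 0.36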
P2-2-171
Withdrawn
